UK to pilot world-leading approach to improve ethical adoption of AI in healthcare
The NHS in England will begin a world-leading pilot of Algorithmic Impact Assessments (AIAs) in healthcare to eradicate biases in algorithms.
- NHS in England to lead the world’s first pilot in Algorithmic Impact Assessments (AIAs) in healthcare
- New process will require researchers and developers to address potential risks such as algorithmic bias before they can access NHS data
- Latest move in efforts to eradicate health inequalities by tackling biases in systems which underpin future health and care services
In a world first, the NHS in England will trial a new approach to the ethical adoption of AI in healthcare, with the aim of eradicating biases in artificial intelligence.
AIAs designed by the Ada Lovelace Institute will be piloted to support researchers and developers to assess the possible risks and biases of AI systems to patients and the public before they can access NHS data.
While artificial intelligence has the potential to support health and care workers to deliver better care for people, it could also exacerbate existing health inequalities if concerns such as algorithmic bias aren’t accounted for.
Innovation Minister Lord Kamall said:
While AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities.
This pilot once again demonstrates the UK is at the forefront of adopting new technologies in a way that is ethical and patient-centred.
By allowing us to proactively address risks and biases in systems which will underpin the health and care of the future, we are ensuring we create a system of healthcare which works for everyone, no matter who you are or where you are from.
This complements ongoing work from the ethics team at the NHS AI Lab on ensuring datasets for training and testing AI systems are diverse and inclusive. Taken together, this will result in better health outcomes for everyone, and in particular for minority groups.
To ensure best practices are embedded in future technologies, the NHS will support researchers and developers to engage patients and healthcare professionals at an early stage of AI development, when there is greater flexibility to make adjustments and respond to concerns. Supporting patient and public involvement as part of the development process will lead to improvements in patient experience and the clinical integration of AI.
It is hoped that in the future, AIAs could increase the transparency, accountability and legitimacy of the use of AI in healthcare.
Brhmie Balaram, Head of AI Research & Ethics at the NHS AI Lab, said:
Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.
The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.
Following a commission from the NHS AI Lab, the Ada Lovelace Institute has today published its research, which maps out a detailed, step-by-step process for using AIAs in the real world. It is designed to help developers and researchers consider and account for the potential impacts of proposed technologies on people, society and the environment.
Octavia Reeve, Interim Lead, Ada Lovelace Institute, said:
Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit.
We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.
The NHS AI Lab introduced the AI Ethics Initiative to support research and practical interventions that complement existing efforts to validate, evaluate and regulate AI-driven technologies in health and care, with a focus on countering health inequalities.
Background
The NHS is set to trial the use of AIAs as part of the work of the NHS AI Lab.
It will be trialled across a number of the Lab’s initiatives and used as part of the data access process for the National Covid-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).
The NCCID is a centralised database that supports researchers to better understand COVID-19 and develop technology that enables the best care for patients hospitalised with a severe infection. The proposed NMIP will expand on the NCCID and enable the training and testing of screening and diagnostic AI.